Convergence of the BFGS Method for LC¹ Convex Constrained Optimization

Author

  • Xiaojun Chen
Abstract

This paper proposes a BFGS-SQP method for linearly constrained optimization where the objective function f is only required to have a Lipschitz gradient. The KKT system of the problem is equivalent to a system of nonsmooth equations F(v) = 0. At every step a quasi-Newton matrix is updated if ‖F(v_k)‖ satisfies a rule. The method converges globally, and the rate of convergence is superlinear when f is twice strongly differentiable at a solution of the optimization problem. No assumptions on the constraints are required. This generalizes the classical convergence theory of the BFGS method, which requires a twice continuous differentiability assumption on the objective function. Applications to stochastic programs with recourse on a CM5 parallel computer are discussed. Abbreviated title: BFGS method for LC¹ optimization.
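The abstract does not spell out the rule on ‖F(v_k)‖ that triggers the quasi-Newton update, so the following Python sketch is only illustrative: it implements the standard BFGS matrix update with a curvature guard, and the commented driver fragment uses a simple residual-decrease test as a stand-in for the paper's actual rule. The names F, v_old, and v_new are hypothetical.

import numpy as np

def bfgs_update(B, s, y, tol=1e-10):
    # Standard BFGS update of the quasi-Newton matrix B from a step
    # s = v_new - v_old and a change y in the (generalized) gradient or
    # residual.  The curvature guard keeps B symmetric positive definite.
    sy = s @ y
    if sy <= tol * np.linalg.norm(s) * np.linalg.norm(y):
        return B  # skip the update: curvature information too weak
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

# Hypothetical driver fragment: update B only when the norm of the
# nonsmooth KKT residual has decreased, as a placeholder for the
# paper's update rule on ||F(v_k)||.
#   if np.linalg.norm(F(v_new)) < np.linalg.norm(F(v_old)):
#       B = bfgs_update(B, v_new - v_old, F(v_new) - F(v_old))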

Related articles

Modifying the line search formula in the BFGS method to achieve global convergence


Improved Damped Quasi-Newton Methods for Unconstrained Optimization∗

Recently, Al-Baali (2014) has extended the damped technique in the modified BFGS method of Powell (1978) for Lagrange constrained optimization functions to the Broyden family of quasi-Newton methods for unconstrained optimization. Appropriate choices for the damped parameter, which maintain the global and superlinear convergence property of these methods on convex functions and correct the Hess...
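The damped technique referred to here is, in its classical form, Powell's (1978) modification that replaces the gradient-difference vector y by a convex combination of y and B s whenever the curvature s^T y is too small relative to s^T B s. A minimal Python sketch of that classical damping follows; the threshold sigma = 0.2 is Powell's choice, and Al-Baali's extensions vary this parameter across the Broyden family.

import numpy as np

def powell_damped_y(B, s, y, sigma=0.2):
    # Powell's damping: return a vector y_hat with s @ y_hat >= sigma * (s @ B @ s),
    # so the subsequent BFGS-type update stays well defined and positive definite.
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= sigma * sBs:
        return y  # enough curvature, no damping needed
    theta = (1.0 - sigma) * sBs / (sBs - sy)  # lies in (0, 1)
    return theta * y + (1.0 - theta) * Bs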


Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions

Linear convergence rates of descent methods for unconstrained minimization are usually proven under the assumption that the objective function is strongly convex. Recently it was shown that the weaker assumption of restricted strong convexity suffices for linear convergence of the ordinary gradient descent method. A decisive difference from strong convexity is that the set of minimizers of a rest...
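As a small illustration of the claim, the script below (purely illustrative data) runs plain gradient descent on a rank-deficient least-squares problem: f(x) = 0.5 * ||Ax - b||^2 is convex but not strongly convex, since its minimizers form an affine set, yet the gap f(x_k) - f* still shrinks geometrically, consistent with linear convergence under restricted strong convexity.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
A[:, 5:] = 0.0                       # make A rank deficient (rank 5)
b = rng.standard_normal(30)

f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad = lambda x: A.T @ (A @ x - b)
f_star = f(np.linalg.pinv(A) @ b)    # optimal value on the solution set

L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
x = np.zeros(10)
for k in range(100):
    if k % 20 == 0:
        print(k, f(x) - f_star)      # objective gap decreases geometrically
    x -= (1.0 / L) * grad(x)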


On the limited memory BFGS method for large scale optimization

We study the numerical performance of a limited-memory quasi-Newton method for large-scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir, which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir and is better a...
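For reference, the core of the L-BFGS method is the two-loop recursion, which applies an implicit inverse-Hessian approximation built from the m most recent correction pairs (s_i, y_i) without ever forming a matrix. A minimal sketch of the standard algorithm (function name illustrative):

import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    # Two-loop recursion: returns -H_k @ grad, where H_k is the implicit
    # inverse-Hessian approximation built from the stored pairs (s_i, y_i),
    # ordered from oldest to newest.
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        q -= a * y
        alphas.append(a)             # stored newest first
    if s_list:                       # standard initial scaling gamma = s^T y / y^T y
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * (y @ q)
        q += (a - beta) * s
    return -q

After each step, the pair (x_{k+1} - x_k, g_{k+1} - g_k) is appended and the oldest pair is dropped once m pairs (typically 3 to 20) are stored.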


Convergence analysis of a modified BFGS method on convex minimizations

A modified BFGS method is proposed for unconstrained optimization. Global convergence and superlinear convergence on convex functions are established under suitable assumptions. Numerical results indicate that the method is promising.



Publication year: 1995